A Cross-Cultural Assessment of Human Ability to Detect LLM-Generated Fake News about South Africa
Schlippe, Tim, Wölfel, Matthias, Mabokela, Koena Ronny
This study investigates how cultural proximity affects the ability to detect AI-generated fake news by comparing South African participants with those from other nationalities. As large language models increasingly enable the creation of sophisticated fake news, understanding human detection capabilities becomes crucial, particularly across different cultural contexts. We conducted a survey where 89 participants (56 South Africans, 33 from other nationalities) evaluated 10 true South African news articles and 10 AI-generated fake versions. Results reveal an asymmetric pattern: South Africans demonstrated superior performance in detecting true news about their country (40% deviation from ideal rating) compared to other participants (52%), but performed worse at identifying fake news (62% vs. 55%). This difference may reflect South Africans' higher overall trust in news sources. Our analysis further shows that South Africans relied more on content knowledge and contextual understanding when judging credibility, while participants from other countries emphasised formal linguistic features such as grammar and structure. Overall, the deviation from ideal rating was similar between groups (51% vs. 53%), suggesting that cultural familiarity appears to aid verification of authentic information but may also introduce bias when evaluating fabricated content. These insights contribute to understanding cross-cultural dimensions of misinformation detection and inform strategies for combating AI-generated fake news in increasingly globalised information ecosystems where content crosses cultural and geographical boundaries.
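The abstract's "deviation from ideal rating" metric can be read as the mean absolute distance between participants' credibility ratings and the ideal rating for each article type. The following is a minimal illustrative sketch, not the authors' actual code: it assumes a 0-100 credibility scale where the ideal rating is 100 for true articles and 0 for fake ones, expressed as a percentage of the scale.

```python
def deviation_from_ideal(ratings, is_true, scale=100):
    """Mean absolute deviation of credibility ratings from the ideal rating,
    as a percentage of the rating scale.

    Assumptions (not stated in the abstract): ratings lie on a 0..scale axis,
    and the ideal rating is `scale` for true news and 0 for fake news.
    """
    ideal = scale if is_true else 0
    total = sum(abs(r - ideal) for r in ratings)
    return total / (len(ratings) * scale) * 100


# Hypothetical ratings of true articles on a 0-100 scale:
print(deviation_from_ideal([80, 50, 50], is_true=True))
```

Under this reading, a lower percentage means participants' judgments sat closer to the ideal, so the reported 40% vs. 52% gap would indicate South Africans rated true local news more accurately.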
A perishable ability? The future of writing in the face of generative artificial intelligence
The 2020s have witnessed a very significant advance in the development of generative artificial intelligence tools, including text generation systems based on large language models. These tools are increasingly used to generate texts in the most diverse domains -- from technical texts to literary texts -- which might eventually lead to a lower volume of written text production by humans. This article discusses the possibility of a future in which human beings will have lost, or significantly decreased, their ability to write due to the outsourcing of this activity to machines. This possibility parallels the loss of the ability to write at other moments in human history, such as during the so-called Greek Dark Ages (approx. 1200 BCE - 800 BCE).
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- South America > Brazil > Minas Gerais > Belo Horizonte (0.04)
- Europe > Middle East > Cyprus (0.04)
- Instructional Material (0.93)
- Research Report (0.84)
- Information Technology > Artificial Intelligence > Natural Language > Generation (0.88)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.73)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.36)
Neuralink's first female patient reveals shocking effect of brain chip
A woman who has been fully paralyzed for the last 20 years has regained the ability to use a computer, marking a world-first for Elon Musk's company, Neuralink. Thanks to Neuralink's revolutionary implant, Audrey Crews revealed on X how she was able to write her name on a computer screen. 'I tried writing my name for the first time in 20 years. Lol,' Crews posted on X while showing the world her first attempt at a signature since 2005. Using the brain-computer interface (BCI), the implant recipient chose a purple-colored cursor pen to write the name 'Audrey' on the screen in cursive script.
Can the capability of Large Language Models be described by human ability? A Meta Study
Zan, Mingrui, Zhang, Yunquan, Zhang, Boyang, Liu, Fangming, Cheng, Daning
Users of Large Language Models (LLMs) often perceive these models as intelligent entities with human-like capabilities. However, the extent to which LLMs' capabilities truly approximate human abilities remains a topic of debate. In this paper, to characterize the capabilities of LLMs in relation to human capabilities, we collected performance data from over 80 models across 37 evaluation benchmarks. The evaluation benchmarks are categorized into 6 primary abilities and 11 sub-abilities from a human-ability perspective. We then clustered the performance rankings into several categories and compared these clustering results with classifications based on human ability aspects. Our findings lead to the following conclusions: 1. We have confirmed that certain capabilities of LLMs with fewer than 10 billion parameters can indeed be described using human ability metrics; 2. While some abilities are considered interrelated in humans, they appear nearly uncorrelated in LLMs; 3. The capabilities possessed by LLMs vary significantly with the parameter scale of the model.
On Evaluating Explanation Utility for Human-AI Decision Making in NLP
Chaleshtori, Fateme Hashemi, Ghosal, Atreya, Gill, Alexander, Bambroo, Purbid, Marasović, Ana
Is explainability a false promise? This debate has emerged from the insufficient evidence that explanations aid people in situations they are introduced for. More human-centered, application-grounded evaluations of explanations are needed to settle this. Yet, with no established guidelines for such studies in NLP, researchers accustomed to standardized proxy evaluations must discover appropriate measurements, tasks, datasets, and sensible models for human-AI teams in their studies. To help with this, we first review fitting existing metrics. We then establish requirements for datasets to be suitable for application-grounded evaluations. Among over 50 datasets available for explainability research in NLP, we find that 4 meet our criteria. By finetuning Flan-T5-3B, we demonstrate the importance of reassessing the state of the art to form and study human-AI teams. Finally, we present exemplar studies of human-AI decision-making for one of the identified suitable tasks -- verifying the correctness of a legal claim given a contract.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Belgium > Brussels-Capital Region > Brussels (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Education > Educational Setting (0.92)
- Education > Curriculum > Subject-Specific Education (0.46)
Country star Lee Greenwood doubts AI will 'take over human input'
The "America's Got Talent" judge tells Fox News Digital why he doesn't like AI technology in songwriting. Country music legend Lee Greenwood knows the importance of creating from the heart. The "God Bless the USA" singer has more than half a century of experience in the entertainment world, with dozens of hit songs and albums under his belt. When it comes to artificial intelligence and figuring out if AI has a place in the ever-changing landscape of the music industry, Greenwood took a cue from the past. "I approach this just like when guitar players first got a wah-wah pedal and the chorus on an organ – it's like, it's kind of a new thing," he told Fox News Digital.
- North America > United States > Tennessee (0.06)
- North America > United States > Nevada (0.06)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Will Artificial Intelligence Be Able to Surpass Humans in Every Aspect?
Artificial intelligence has made tremendous strides in recent decades. Voice recognition, automatic language translation, and complex decision-making are just a few of the once human-exclusive tasks that can now be done by machines. However, artificial intelligence is still constrained in numerous ways. Even though machines are very good at processing large amounts of data and making decisions based on patterns, they still cannot match human creativity and intuition. Scientists and researchers can come up with novel concepts that alter the world, while artists and writers can produce works of literature and art that are one-of-a-kind and illuminating.
Up to 80 PERCENT of US jobs could be impacted by ChatGPT-like AI in coming years, study warns
ChatGPT-like AI systems will impact 80 percent of US jobs, with personal financial advisors and brokers, insurers and data processors at the top of the list. The warning comes from researchers at OpenAI and the University of Pennsylvania, who investigated whether the technology could complete tasks faster than humans. The team found that about 15 percent of all worker tasks could be completed significantly faster by AI and with the same level of quality. Here, 'exposure' means how much a job will be impacted by AI. Fears of software eliminating human jobs have recently made waves across the globe following the launch of ChatGPT in November and its ability to perform eerily human professional tasks such as writing emails and resumes.
- Banking & Finance (0.72)
- Government > Regional Government (0.32)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.51)
Tech guru Jaron Lanier: 'The danger isn't that AI destroys us. It's that it drives us insane'
Jaron Lanier, the godfather of virtual reality and the sage of all things web, is nicknamed the Dismal Optimist. And there has never been a time we've needed his dismal optimism more. It's hard to read an article or listen to a podcast these days without doomsayers telling us we've pushed our luck with artificial intelligence, our hubris is coming back to haunt us and robots are taking over the world. There are stories of chatbots becoming best friends, declaring their love, trying to disrupt stable marriages, and threatening chaos on a global scale. Is AI really capable of outsmarting us and taking over the world? "Well, your question makes no sense," Lanier says in his gentle sing-song voice. "You've just used the set of terms that to me are fictions."
- North America > United States > New Mexico (0.05)
- North America > United States > California (0.05)
- North America > United States > New York (0.04)
The 20 jobs most at risk as the AI boom continues: Is YOUR occupation on the list?
The rise of artificial intelligence is set to boost economic growth, but it is also poised to take over the job market - and a new study reveals the 20 occupations most at risk. A team of researchers led by Princeton University applied an AI occupational-exposure methodology, linking 10 AI-powered applications, such as language modeling, to 52 human abilities to see which are closely related. The results showed that telemarketers, teachers, school psychologists and judges are among those at highest risk. Fears of software eliminating human jobs have recently made waves across the globe following the launch of ChatGPT and its ability to perform eerily human professional tasks such as writing emails and resumes. 'The effect of AI on work will likely be multi-faceted.
- North America > United States (0.49)
- North America > Canada > Ontario > Toronto (0.15)
- South America > Colombia (0.06)
- Government > Regional Government (0.49)
- Banking & Finance > Economy (0.35)